We study a Markov decision problem with unknown transition probabilities. We compute the exact Bayesian decision rule and compare it with two approximations. The first is an infinite-history, rational-expectations approximation that assumes the decision maker knows the transition probabilities. The second is a version of Kreps' anticipated-utility model in which decision makers update using Bayes' law but optimize myopically with respect to their updating of probabilities. For several consumption-smoothing examples, the anticipated-utility approximation outperforms the rational-expectations approximation, which misrepresents the market price of risk. Copyright 2008 by the Economics Department of the University of Pennsylvania and Osaka University Institute of Social and Economic Research Association.
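To make the contrast concrete, the following is a minimal sketch (not the paper's model) of the belief-updating side of an anticipated-utility agent: a two-state Markov chain with unknown transition probabilities, Beta priors updated by Bayes' law each period, and a point estimate (the posterior mean) that the agent would treat as if it were the true, permanent transition probability when optimizing. The chain, priors, and parameter values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain: true_p[s] is the probability of
# transitioning to state 0 when the current state is s.
true_p = np.array([0.9, 0.2])

# Beta(1, 1) priors on each row's probability of moving to state 0.
alpha = np.ones(2)
beta = np.ones(2)

state = 0
for t in range(500):
    next_state = 0 if rng.random() < true_p[state] else 1
    # Bayes' law: update the Beta posterior for the row just observed.
    if next_state == 0:
        alpha[state] += 1.0
    else:
        beta[state] += 1.0
    state = next_state

# An anticipated-utility agent would plug this posterior mean into the
# decision problem as if it were the known, permanent transition law,
# re-solving each period as beliefs change; the exact Bayesian rule
# instead accounts for how future observations will revise beliefs.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # converges toward true_p as observations accumulate
```

The full Bayesian decision rule is more demanding because the belief state (here, the Beta parameters) becomes part of the state vector of the dynamic program.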